Review




Structured Review

SoftMax Inc dnn classifiers
Dnn Classifiers, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/dnn classifiers/product/SoftMax Inc
Average 90 stars, based on 1 article reviews
dnn classifiers - by Bioz Stars, 2026-05
90/100 stars

Similar Products

90
Kaggle Inc deep neural network (dnn) classifier
Literature on deep learning-based ASD detection.
Deep Neural Network (Dnn) Classifier, supplied by Kaggle Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/deep neural network (dnn) classifier/product/Kaggle Inc
Average 90 stars, based on 1 article reviews
deep neural network (dnn) classifier - by Bioz Stars, 2026-05
90/100 stars

90
SoftMax Inc dnn classifiers
Literature on deep learning-based ASD detection.
Dnn Classifiers, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/dnn classifiers/product/SoftMax Inc
Average 90 stars, based on 1 article reviews
dnn classifiers - by Bioz Stars, 2026-05
90/100 stars

90
SoftMax Inc multitumor analyzer (mta-55) dnn with softmax classifier
Literature on deep learning-based ASD detection.
Multitumor Analyzer (Mta 55) Dnn With Softmax Classifier, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/multitumor analyzer (mta-55) dnn with softmax classifier/product/SoftMax Inc
Average 90 stars, based on 1 article reviews
multitumor analyzer (mta-55) dnn with softmax classifier - by Bioz Stars, 2026-05
90/100 stars

90
Enron Corporation dnn-bi-lstm classifier
Literature on deep learning-based ASD detection.
Dnn Bi Lstm Classifier, supplied by Enron Corporation, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/dnn-bi-lstm classifier/product/Enron Corporation
Average 90 stars, based on 1 article reviews
dnn-bi-lstm classifier - by Bioz Stars, 2026-05
90/100 stars

90
SoftMax Inc ae-based dnn with softmax classifier
Literature on deep learning-based ASD detection.
Ae Based Dnn With Softmax Classifier, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/ae-based dnn with softmax classifier/product/SoftMax Inc
Average 90 stars, based on 1 article reviews
ae-based dnn with softmax classifier - by Bioz Stars, 2026-05
90/100 stars

90
TSUMURA dnn classifier
Applications of counterfactual explanation in fMRI. The example illustrates an application of counterfactual explanation to a misclassification by a DNN classifier. (A) In this example, a DNN classifier incorrectly assigned EMOTION to a map of brain activation obtained in a MOTOR task. Because of the black-box nature of the DNN classifier, it is difficult to explain why the misclassification occurred. (B) A generative neural network for counterfactual brain activation (CAG) minimally transforms the real brain activation in (A) so that the DNN classifier now assigns MOTOR to the morphed activation (counterfactual activation). (C) Counterfactual explanation of misclassification in (A) can be obtained by taking the difference between the real activation and the counterfactual activation. In this example, the real brain activation would have been classified (correctly) as MOTOR if red (blue) brain regions in the counterfactual explanation had been more (less) active.
Dnn Classifier, supplied by TSUMURA, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citations. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/dnn classifier/product/TSUMURA
Average 90 stars, based on 1 article reviews
dnn classifier - by Bioz Stars, 2026-05
90/100 stars

Image Search Results


Literature on deep learning-based ASD detection.

Journal: PLOS One

Article Title: A deep learning-based ensemble for autism spectrum disorder diagnosis using facial images

doi: 10.1371/journal.pone.0321697

Figure Legend Snippet: Literature on deep learning-based ASD detection.

Article Snippet: [ ], Kaggle, Deep neural network (DNN) classifier, 91%, Limited dataset size and variability; requires validation on diverse populations.

Techniques: Biomarker Discovery

Accuracy comparison of the proposed model and related studies.

Journal: PLOS One

Article Title: A deep learning-based ensemble for autism spectrum disorder diagnosis using facial images

doi: 10.1371/journal.pone.0321697

Figure Legend Snippet: Accuracy comparison of the proposed model and related studies.

Article Snippet: [ ], Kaggle, Deep neural network (DNN) classifier, 91%, Limited dataset size and variability; requires validation on diverse populations.

Techniques: Comparison

Applications of counterfactual explanation in fMRI. The example illustrates an application of counterfactual explanation to a misclassification by a DNN classifier. (A) In this example, a DNN classifier incorrectly assigned EMOTION to a map of brain activation obtained in a MOTOR task. Because of the black-box nature of the DNN classifier, it is difficult to explain why the misclassification occurred. (B) A generative neural network for counterfactual brain activation (CAG) minimally transforms the real brain activation in (A) so that the DNN classifier now assigns MOTOR to the morphed activation (counterfactual activation). (C) Counterfactual explanation of misclassification in (A) can be obtained by taking the difference between the real activation and the counterfactual activation. In this example, the real brain activation would have been classified (correctly) as MOTOR if red (blue) brain regions in the counterfactual explanation had been more (less) active.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Applications of counterfactual explanation in fMRI. The example illustrates an application of counterfactual explanation to a misclassification by a DNN classifier. (A) In this example, a DNN classifier incorrectly assigned EMOTION to a map of brain activation obtained in a MOTOR task. Because of the black-box nature of the DNN classifier, it is difficult to explain why the misclassification occurred. (B) A generative neural network for counterfactual brain activation (CAG) minimally transforms the real brain activation in (A) so that the DNN classifier now assigns MOTOR to the morphed activation (counterfactual activation). (C) Counterfactual explanation of misclassification in (A) can be obtained by taking the difference between the real activation and the counterfactual activation. In this example, the real brain activation would have been classified (correctly) as MOTOR if red (blue) brain regions in the counterfactual explanation had been more (less) active.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Activation Assay
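The legend above describes a general recipe: a generator minimally perturbs the input until the classifier flips its decision, and the explanation is the pixel-by-pixel difference between the counterfactual and the real activation map. A minimal NumPy sketch of that final subtraction step (the toy maps and values here are illustrative, not data from the article):

```python
import numpy as np

def counterfactual_explanation(real, counterfactual):
    """Pixel-by-pixel difference between the counterfactual and the real
    activation map. Positive pixels mark regions that had to become more
    active for the classifier to flip its decision; negative pixels mark
    regions that had to become less active."""
    return counterfactual - real

# Toy 4x4 "activation maps" standing in for the flattened cortical sheets.
real = np.zeros((4, 4))
cf = real.copy()
cf[1, 2] = 0.8   # a region the generator had to turn up
cf[3, 0] = -0.5  # a region the generator had to turn down

delta = counterfactual_explanation(real, cf)
print(delta[1, 2], delta[3, 0])  # 0.8 -0.5
```

In the article's color convention, the positive (red) and negative (blue) pixels of `delta` are exactly the "more (less) active" regions the legend refers to.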

DNN classifier for brain activity decoding. (A) Following the standard procedure developed by HCP (Glasser et al., ), neocortex in the two hemispheres was mapped to two cortical sheets. Each neocortical activity image was mapped to the two sheets, which was then input to the DNN classifier (for details, see Tsumura et al., ). (B) Model architecture of the DNN classifier. The input was a picture containing two sheets of cortical activations. The picture was downsampled for later processing by the generative neural network. The DNN classifier was a deep convolutional network similar to the one described in our previous study (Tsumura et al., ). The output of the DNN classifier was one-hot vectors representing seven behavioral tasks in the HCP dataset. (C) Training history of the transfer learning. Test accuracy (blue) and validation accuracy (magenta) are shown for five replicates. Note that the chance level is 14.3% (1/7). (D) Profile of the classifier's decision (confusion matrix) in the validation set.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: DNN classifier for brain activity decoding. (A) Following the standard procedure developed by HCP (Glasser et al., ), neocortex in the two hemispheres was mapped to two cortical sheets. Each neocortical activity image was mapped to the two sheets, which was then input to the DNN classifier (for details, see Tsumura et al., ). (B) Model architecture of the DNN classifier. The input was a picture containing two sheets of cortical activations. The picture was downsampled for later processing by the generative neural network. The DNN classifier was a deep convolutional network similar to the one described in our previous study (Tsumura et al., ). The output of the DNN classifier was one-hot vectors representing seven behavioral tasks in the HCP dataset. (C) Training history of the transfer learning. Test accuracy (blue) and validation accuracy (magenta) are shown for five replicates. Note that the chance level is 14.3% (1/7). (D) Profile of the classifier's decision (confusion matrix) in the validation set.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Activity Assay, Biomarker Discovery
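The legend describes a convolutional classifier whose output is a one-hot vector over the seven HCP behavioral tasks, which is why chance accuracy is 1/7 ≈ 14.3%. A schematic sketch of that output stage only (the task names are the standard HCP task-fMRI battery; the logits and decision logic are illustrative, not the article's actual architecture):

```python
import numpy as np

# The seven behavioral tasks in the HCP task-fMRI battery.
HCP_TASKS = ["EMOTION", "GAMBLING", "LANGUAGE", "MOTOR",
             "RELATIONAL", "SOCIAL", "WM"]

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def classify(logits):
    """Map raw logits to a predicted task label and a one-hot vector,
    mirroring the classifier's one-hot output described in the legend."""
    probs = softmax(np.asarray(logits, dtype=float))
    one_hot = np.zeros(len(HCP_TASKS))
    one_hot[probs.argmax()] = 1.0
    return HCP_TASKS[probs.argmax()], one_hot

label, vec = classify([0.1, -1.2, 0.3, 2.5, 0.0, -0.4, 0.2])
print(label)  # MOTOR
print(round(1 / len(HCP_TASKS), 3))  # chance level, ~0.143
```

With seven mutually exclusive classes, a classifier guessing uniformly at random lands on the correct task 1/7 of the time, matching the 14.3% chance level quoted in panel (C).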

Decision profile of DNN classifier on counterfactual activations.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Decision profile of DNN classifier on counterfactual activations.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques:

Counterfactual explanation of correct classification. (A) Schematics of the question asked in this analysis. In this example, the DNN classifier correctly assigned a “MOTOR” label to a real brain activation in the MOTOR task. Here, we want to interrogate this correct decision. Specifically, we ask a question “why did the classifier assign MOTOR instead of EMOTION?” (B–E) Examples of counterfactual explanation. (B) A population average map of real brain activation in the MOTOR task. (C) A population average map of counterfactual activation obtained by transforming the map in (A) to EMOTION. Transformation was conducted for each activation map and then averaged across the population. (D) Pixel-by-pixel subtraction of maps in (B) and (A) that serves as counterfactual explanation. This map explains why the map was classified as MOTOR but not EMOTION. (E) Simple difference between the average of real activations in the EMOTION and MOTOR tasks.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Counterfactual explanation of correct classification. (A) Schematics of the question asked in this analysis. In this example, the DNN classifier correctly assigned a “MOTOR” label to a real brain activation in the MOTOR task. Here, we want to interrogate this correct decision. Specifically, we ask a question “why did the classifier assign MOTOR instead of EMOTION?” (B–E) Examples of counterfactual explanation. (B) A population average map of real brain activation in the MOTOR task. (C) A population average map of counterfactual activation obtained by transforming the map in (A) to EMOTION. Transformation was conducted for each activation map and then averaged across the population. (D) Pixel-by-pixel subtraction of maps in (B) and (A) that serves as counterfactual explanation. This map explains why the map was classified as MOTOR but not EMOTION. (E) Simple difference between the average of real activations in the EMOTION and MOTOR tasks.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Activation Assay, Transformation Assay

Decision of DNN classifier on counterfactual activations obtained from correctly classified brain activations.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Decision of DNN classifier on counterfactual activations obtained from correctly classified brain activations.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Control

Counterfactual explanation of incorrect classification. (A) Schematics of the question asked in this analysis. In this example, the DNN classifier incorrectly assigned a “SOCIAL” label to a real brain activation in the EMOTION task. Here, we want to interrogate this incorrect decision. Specifically, we ask a question “why did the classifier (incorrectly) assign EMOTION instead of SOCIAL?” (B–E) Example of counterfactual explanation. (B) A single brain activation map for EMOTION that was incorrectly classified as SOCIAL by the DNN classifier. (C) A map of counterfactual activation obtained by transforming the map in (A) to SOCIAL. (D) Pixel-by-pixel subtraction of maps in (B) and (A) that serves as counterfactual explanation. This map explains why the map was incorrectly classified as SOCIAL but not EMOTION. (E) Simple difference between the average of real activations in the EMOTION and the single activation map for SOCIAL shown in (A).

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Counterfactual explanation of incorrect classification. (A) Schematics of the question asked in this analysis. In this example, the DNN classifier incorrectly assigned a “SOCIAL” label to a real brain activation in the EMOTION task. Here, we want to interrogate this incorrect decision. Specifically, we ask a question “why did the classifier (incorrectly) assign EMOTION instead of SOCIAL?” (B–E) Example of counterfactual explanation. (B) A single brain activation map for EMOTION that was incorrectly classified as SOCIAL by the DNN classifier. (C) A map of counterfactual activation obtained by transforming the map in (A) to SOCIAL. (D) Pixel-by-pixel subtraction of maps in (B) and (A) that serves as counterfactual explanation. This map explains why the map was incorrectly classified as SOCIAL but not EMOTION. (E) Simple difference between the average of real activations in the EMOTION and the single activation map for SOCIAL shown in (A).

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Activation Assay

Decision of the DNN classifier on counterfactual activations obtained from misclassified brain activations.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Decision of the DNN classifier on counterfactual activations obtained from misclassified brain activations.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Control

Counterfactual exaggeration of brain activation. (A) Schematic of counterfactual exaggeration. A brain activation (MOTOR task in this example) was iteratively transformed toward MOTOR by CAG. This iterative transformation accentuates (exaggerates) image features that bias the classifier decision toward MOTOR. (B–D) Example of counterfactual exaggeration. A brain activation in the EMOTION task (B) was iteratively transformed toward EMOTION eight times. Images after the third (C) and eighth (D) transformations are shown. (E) The subtle image feature enhanced by counterfactual exaggeration was isolated by taking the difference of exaggerated images. In this example, differences between the exaggerated images in (C) and (D) were calculated. The resulting difference image showed a texture-like pattern. (F) Example of a texture-like feature extracted by counterfactual exaggeration (top). The bottom panel shows the texture-like patterns added to randomly chosen raw brain activations (middle). See also for another example. (G) Decisions of the DNN classifier on brain activations with texture-like patterns added. Each dot represents one example texture (N = 12; see Methods for details). The bar graph shows the mean and the standard deviation. The classifier was significantly biased toward the class of the texture-like patterns (*, p < 0.001, Wilcoxon's signed-rank test). Chance level was one in seven.

Journal: Frontiers in Neuroinformatics

Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

doi: 10.3389/fninf.2021.802938

Figure Legend Snippet: Counterfactual exaggeration of brain activation. (A) Schematic of counterfactual exaggeration. A brain activation (MOTOR task in this example) was iteratively transformed toward MOTOR by CAG. This iterative transformation accentuates (exaggerates) image features that bias the classifier decision toward MOTOR. (B–D) Example of counterfactual exaggeration. A brain activation in the EMOTION task (B) was iteratively transformed toward EMOTION eight times. Images after the third (C) and eighth (D) transformations are shown. (E) The subtle image feature enhanced by counterfactual exaggeration was isolated by taking the difference of exaggerated images. In this example, differences between the exaggerated images in (C) and (D) were calculated. The resulting difference image showed a texture-like pattern. (F) Example of a texture-like feature extracted by counterfactual exaggeration (top). The bottom panel shows the texture-like patterns added to randomly chosen raw brain activations (middle). See also for another example. (G) Decisions of the DNN classifier on brain activations with texture-like patterns added. Each dot represents one example texture (N = 12; see Methods for details). The bar graph shows the mean and the standard deviation. The classifier was significantly biased toward the class of the texture-like patterns (*, p < 0.001, Wilcoxon's signed-rank test). Chance level was one in seven.

Article Snippet: The DNN classifier of brain activations used in this study was adapted from our previous study (Tsumura et al., ) ( ).

Techniques: Activation Assay, Transformation Assay, Isolation, Standard Deviation
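The exaggeration procedure in this legend is, at its core, a simple loop: apply the generator toward the same target class repeatedly, then subtract two iterates to isolate whatever feature it keeps amplifying. A sketch of that loop, where `transform_toward` is a hypothetical linear stand-in for one CAG transformation step (the real generator is a trained GAN, not an additive nudge):

```python
import numpy as np

def transform_toward(image, target_pattern):
    """Hypothetical stand-in for one CAG transformation step: nudges the
    image a small, fixed amount toward a class-specific pattern."""
    return image + 0.1 * target_pattern

def exaggerate(image, target_pattern, n_iters):
    """Iteratively transform an activation map toward a target class,
    keeping every intermediate image (as in panels B-D)."""
    history = [image]
    for _ in range(n_iters):
        history.append(transform_toward(history[-1], target_pattern))
    return history

rng = np.random.default_rng(0)
activation = rng.normal(size=(4, 4))   # toy raw activation map
texture = np.ones((4, 4))              # class-specific pattern to amplify

steps = exaggerate(activation, texture, n_iters=8)

# Panel (E): subtracting the 8th and 3rd iterates cancels the raw
# activation and leaves only the amplified feature (5 steps of 0.1 each).
feature = steps[8] - steps[3]
print(np.allclose(feature, 0.5 * texture))  # True
```

The subtraction cancels everything the loop leaves unchanged, which is why the difference of two exaggerated images exposes only the texture-like pattern the classifier is sensitive to; panel (G)'s significance test (Wilcoxon's signed-rank test against the 1/7 chance level) then checks whether adding that pattern to fresh activations biases the classifier's decisions.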